Results 1 - 9 of 9
1.
Med Biol Eng Comput ; 62(3): 865-881, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38060101

ABSTRACT

Retinal vascular tortuosity is an excessive bending and twisting of the blood vessels in the retina that is associated with numerous health conditions. We propose a novel methodology for the automated assessment of retinal vascular tortuosity from color fundus images. Our methodology takes several anatomical factors into consideration to weigh the importance of each individual blood vessel. First, we use deep neural networks to produce a robust extraction of the different anatomical structures. Then, the weighting coefficients that are required for the integration of the different anatomical factors are adjusted using evolutionary computation. Finally, the proposed methodology also provides visual representations that explain the contribution of each individual blood vessel to the predicted tortuosity, hence allowing us to understand the decisions of the model. We validate our proposal on a dataset of color fundus images that provides a consensus ground truth as well as the annotations of five clinical experts. Our proposal outperforms previous automated methods and offers a performance comparable to that of the clinical experts. Therefore, our methodology proves to be a viable alternative for the assessment of retinal vascular tortuosity. This could facilitate the use of this biomarker in clinical practice and medical research.
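
A minimal sketch of how per-vessel tortuosity values could be combined into a single image-level score using anatomical weighting factors, in the spirit of the abstract above. The tortuosity measure (arc length over chord length) and the specific weighting factors are illustrative assumptions, not the paper's exact formulation; the paper tunes the weighting coefficients with evolutionary computation, which is not reproduced here.

```python
import numpy as np

def vessel_tortuosity(points: np.ndarray) -> float:
    """Arc-length / chord-length ratio for one vessel centerline (N x 2 array)."""
    arc = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / max(chord, 1e-8)

def image_tortuosity(vessels, calibers, fovea_dists, w=(1.0, 1.0, 1.0)):
    """Weighted average of per-vessel tortuosities.

    vessels:     list of centerlines (N_i x 2 arrays)
    calibers:    mean vessel width per vessel (pixels)
    fovea_dists: mean distance of each vessel to the fovea center (pixels)
    w:           weighting coefficients; placeholders here, tuned with
                 evolutionary computation in the paper.
    """
    tort = np.array([vessel_tortuosity(v) for v in vessels])
    # Hypothetical anatomical importance: wider, longer vessels and vessels
    # closer to the fovea contribute more to the image-level score.
    importance = (w[0] * np.asarray(calibers)
                  + w[1] / (1.0 + np.asarray(fovea_dists))
                  + w[2] * np.array([len(v) for v in vessels]))
    return float(np.sum(importance * tort) / np.sum(importance))
```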


Subject(s)
Artificial Intelligence; Retinal Diseases; Humans; Retinal Vessels/diagnostic imaging; Retina; Fundus Oculi; Algorithms
2.
Neural Netw ; 170: 254-265, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37995547

ABSTRACT

Multi-task learning is a promising paradigm to leverage task interrelations during the training of deep neural networks. A key challenge in the training of multi-task networks is to adequately balance the complementary supervisory signals of multiple tasks. In that regard, although several task-balancing approaches have been proposed, they are usually limited by the use of per-task weighting schemes and do not completely address the uneven contribution of the different tasks to the network training. In contrast to classical approaches, we propose a novel Multi-Adaptive Optimization (MAO) strategy that dynamically adjusts the contribution of each task to the training of each individual parameter in the network. This automatically produces balanced learning across tasks and across parameters, throughout the whole training and for any number of tasks. To validate our proposal, we perform comparative experiments on real-world computer vision datasets under different experimental settings. These experiments allow us to analyze the performance obtained in several multi-task scenarios along with the learning balance across tasks, network layers and training steps. The results demonstrate that MAO outperforms previous task-balancing alternatives. Additionally, the performed analyses provide insights into the advantages of this novel approach for multi-task learning.
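
An illustrative sketch of per-parameter task balancing in the spirit of MAO: each task's gradient is rescaled parameter by parameter before the update, so that no single task dominates any individual weight. The exact MAO update rule is not reproduced; the per-parameter normalization below is a simplifying assumption.

```python
import torch

def balanced_step(model, task_losses, optimizer, eps=1e-12):
    params = [p for p in model.parameters() if p.requires_grad]
    # One gradient list per task, kept separate instead of summing the losses.
    per_task_grads = [
        torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
        for loss in task_losses
    ]
    for i, p in enumerate(params):
        combined = torch.zeros_like(p)
        for grads in per_task_grads:
            g = grads[i]
            if g is None:
                continue
            # Normalize each task's contribution to this parameter so that
            # tasks with larger gradient magnitudes do not dominate it.
            combined = combined + g / (g.norm() + eps)
        p.grad = combined / len(per_task_grads)
    optimizer.step()
    optimizer.zero_grad()
```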


Subject(s)
Machine Learning; Neural Networks, Computer; Monoamine Oxidase
3.
Quant Imaging Med Surg ; 13(7): 4540-4562, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37456305

ABSTRACT

Background: Retinal imaging is widely used to diagnose many diseases, both systemic and eye-specific. In these cases, image registration, which is the process of aligning images taken from different viewpoints or moments in time, is fundamental for comparing different images and assessing changes in their appearance, commonly caused by disease progression. Currently, the field of color fundus registration is dominated by classical methods, as deep learning alternatives have not shown sufficient improvement over classical methods to justify the added computational cost. However, deep learning registration methods are still considered beneficial, as they can be easily adapted to different modalities and devices following a data-driven learning approach. Methods: In this work, we propose a novel methodology to register color fundus images using deep learning for the joint detection and description of keypoints. In particular, we use an unsupervised neural network trained to obtain repeatable keypoints and reliable descriptors. These keypoints and descriptors make it possible to produce an accurate registration using RANdom SAmple Consensus (RANSAC). We train the method using the Messidor dataset and test it on the Fundus Image Registration Dataset (FIRE), both of which are publicly accessible. Results: Our work demonstrates a color fundus registration method that is robust to changes in imaging devices and capture conditions. Moreover, we conduct multiple experiments exploring several of the method's parameters to assess their impact on the registration performance. The method obtained an overall Registration Score of 0.695 for the whole FIRE dataset (0.925 for category S, 0.352 for P, and 0.726 for A). Conclusions: Our proposal improves the results of previous deep learning methods in every category and surpasses the performance of classical approaches in category A, which involves disease progression and thus represents the most relevant scenario for clinical practice, as registration is commonly used in patients with diseases for monitoring purposes.
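
A minimal keypoint-based registration pipeline like the one described above: detected keypoints and descriptors are matched and a transform is estimated with RANSAC. ORB is used here as a stand-in detector/descriptor purely for illustration; the paper instead learns the keypoints and descriptors with an unsupervised network.

```python
import cv2
import numpy as np

def register(fixed_path: str, moving_path: str) -> np.ndarray:
    fixed = cv2.imread(fixed_path, cv2.IMREAD_GRAYSCALE)
    moving = cv2.imread(moving_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp_f, des_f = orb.detectAndCompute(fixed, None)
    kp_m, des_m = orb.detectAndCompute(moving, None)

    # Brute-force matching with cross-check to discard ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_m, des_f), key=lambda m: m.distance)

    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects outlier matches while estimating the transform.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
    return cv2.warpPerspective(moving, H, (fixed.shape[1], fixed.shape[0]))
```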

4.
Comput Biol Med ; 152: 106451, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36571941

ABSTRACT

In recent years, deep learning techniques have emerged as powerful alternatives for solving biomedical image analysis problems. However, training deep neural networks effectively usually requires large amounts of labeled data. This is even more critical in biomedical imaging due to the added difficulty of obtaining data labeled by experienced clinicians. To mitigate the impact of data scarcity, one of the most commonly used strategies is transfer learning. Nevertheless, the success of this approach depends on the effectiveness of the available pre-training techniques for learning from little or no labeled data. In this work, we explore the application of the Context Encoder paradigm for transfer learning in the domain of retinal image analysis. To this aim, we propose several approaches that allow working with full-resolution images and improve the recognition of the retinal structures. To validate the proposals, the Context Encoder pre-trained models are fine-tuned to perform two relevant tasks in the domain: vessel segmentation and fovea localization. The experiments performed on different public datasets demonstrate that the proposed Context Encoder approaches mitigate the impact of data scarcity and are superior to previous alternatives in this domain.
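
A minimal sketch of Context Encoder style pre-training: random regions of the retinal image are masked out and an encoder-decoder is trained to reconstruct the missing content, after which the encoder can be fine-tuned for vessel segmentation or fovea localization. The tiny network and the masking scheme below are illustrative assumptions, not the paper's full-resolution architecture.

```python
import torch
import torch.nn as nn

class TinyContextEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def pretrain_step(model, images, optimizer, mask_size=64):
    # Zero out a random square patch in every image of the batch.
    masked = images.clone()
    _, _, h, w = images.shape
    y = torch.randint(0, h - mask_size, (1,)).item()
    x = torch.randint(0, w - mask_size, (1,)).item()
    masked[:, :, y:y + mask_size, x:x + mask_size] = 0.0

    recon = model(masked)
    # Reconstruction loss only on the masked region, as in context encoders.
    loss = nn.functional.l1_loss(
        recon[:, :, y:y + mask_size, x:x + mask_size],
        images[:, :, y:y + mask_size, x:x + mask_size])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```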


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Diagnostic Imaging; Retina/diagnostic imaging; Machine Learning
5.
Comput Methods Programs Biomed ; 229: 107296, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36481530

ABSTRACT

BACKGROUND AND OBJECTIVES: Age-related macular degeneration (AMD) is a degenerative disorder affecting the macula, a key area of the retina for visual acuity. Nowadays, AMD is the most frequent cause of blindness in developed countries. Although some promising treatments have been proposed that effectively slow down its development, their effectiveness significantly diminishes in the advanced stages. This emphasizes the importance of large-scale screening programs for early detection. Nevertheless, implementing such programs for a disease like AMD is usually unfeasible, since the population at risk is large and the diagnosis is challenging. For the characterization of the disease, clinicians have to identify and localize certain retinal lesions. All this motivates the development of automatic diagnostic methods. In this sense, several works have achieved highly positive results for AMD detection using convolutional neural networks (CNNs). However, none of them incorporates explainability mechanisms linking the diagnosis to its related lesions to help clinicians better understand the decisions of the models. This is especially relevant, since the absence of such mechanisms limits the application of automatic methods in clinical practice. In that regard, we propose an explainable deep learning approach for the diagnosis of AMD via the joint identification of its associated retinal lesions. METHODS: In our proposal, a CNN with a custom architectural setting is trained end-to-end for the joint identification of AMD and its associated retinal lesions. With the proposed setting, the lesion identification is directly derived from independent lesion activation maps; then, the diagnosis is obtained from the identified lesions. The training is performed end-to-end using image-level labels. Thus, lesion-specific activation maps are learned in a weakly-supervised manner. The provided lesion information is of high clinical interest, as it allows clinicians to assess the developmental stage of the disease. Additionally, the proposed approach makes it possible to explain the diagnosis obtained by the models directly from the identified lesions and their corresponding activation maps. The training data necessary for the approach can be obtained without much extra work on the part of clinicians, since the lesion information is routinely present in medical records. This is an important advantage over other methods, including fully-supervised lesion segmentation methods, which require pixel-level labels whose acquisition is arduous. RESULTS: The experiments conducted on 4 different datasets demonstrate that the proposed approach is able to identify AMD and its associated lesions with satisfactory performance. Moreover, the evaluation of the lesion activation maps shows that the models trained using the proposed approach are able to identify the pathological areas within the image and, in most cases, to correctly determine to which lesion they correspond. CONCLUSIONS: The proposed approach provides meaningful information (lesion identification and lesion activation maps) that conveniently explains and complements the diagnosis, and is of particular interest to clinicians for the diagnostic process. Moreover, the data needed to train the networks using the proposed approach is commonly easy to obtain, which represents an important advantage in fields with particularly scarce data, such as medical imaging.
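
An illustrative sketch of the weakly-supervised setting described above: a CNN backbone produces one activation map per lesion, image-level lesion scores are obtained by global pooling of those maps, and the AMD diagnosis is then derived from the lesion scores. The ResNet backbone and the way the diagnosis is aggregated from lesions are assumptions for illustration, not the paper's custom architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class LesionExplainedAMD(nn.Module):
    def __init__(self, num_lesions: int = 4):
        super().__init__()
        backbone = resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # One spatial activation map per lesion type.
        self.lesion_maps = nn.Conv2d(512, num_lesions, kernel_size=1)

    def forward(self, x):
        maps = self.lesion_maps(self.features(x))   # (B, L, H, W)
        lesion_logits = maps.amax(dim=(2, 3))       # image-level lesion scores
        # Diagnosis derived from the lesions: AMD is present if any lesion is.
        amd_logit = lesion_logits.amax(dim=1, keepdim=True)
        return amd_logit, lesion_logits, maps

# Training only needs image-level labels, e.g. BCEWithLogitsLoss on
# lesion_logits and amd_logit; the maps themselves receive no pixel supervision.
```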


Subject(s)
Deep Learning; Macular Degeneration; Humans; Fundus Oculi; Macular Degeneration/diagnostic imaging; Neural Networks, Computer; Retina/diagnostic imaging
6.
Comput Biol Med ; 143: 105302, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35219187

ABSTRACT

Diabetic retinopathy is an increasingly prevalent eye disorder that can lead to severe vision impairment. The severity grading of the disease using retinal images is key to providing adequate treatment. However, in order to learn the diverse patterns and complex relations that are required for the grading, deep neural networks require very large annotated datasets that are not always available. This has typically been addressed by reusing networks that were pre-trained for natural image classification, hence relying on additional annotated data from a different domain. In contrast, we propose a novel pre-training approach that takes advantage of unlabeled multimodal visual data commonly available in ophthalmology. The use of multimodal visual data for pre-training purposes has previously been explored by training a network in the prediction of one image modality from another. However, that approach does not ensure a broad understanding of the retinal images, given that the network may exclusively focus on the similarities between modalities while ignoring the differences. Thus, we propose a novel self-supervised pre-training that explicitly teaches the networks to learn the common characteristics between modalities as well as the characteristics that are exclusive to the input modality. This provides a complete comprehension of the input domain and facilitates the training of downstream tasks that require a broad understanding of the retinal images, such as the grading of diabetic retinopathy. To validate and analyze the proposed approach, we performed exhaustive experiments on different public datasets. The transfer learning performance for the grading of diabetic retinopathy is evaluated under different settings while also comparing against previous state-of-the-art pre-training approaches. Additionally, a comparison against relevant state-of-the-art works for the detection and grading of diabetic retinopathy is also provided. The results show a satisfactory performance of the proposed approach, which outperforms previous pre-training alternatives in the grading of diabetic retinopathy.
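
A hedged sketch of self-supervised pre-training on paired modalities (e.g. retinography and angiography): the latent code is split so that one part must predict the paired modality (common characteristics) while the full code must reconstruct the input itself (common plus exclusive characteristics). This split and the tiny autoencoder are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class SharedExclusivePretext(nn.Module):
    def __init__(self, shared_ch=32, exclusive_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, shared_ch + exclusive_ch, 4, 2, 1), nn.ReLU(),
        )
        self.shared_ch = shared_ch
        self.to_other = self._decoder(shared_ch)                # paired modality
        self.to_self = self._decoder(shared_ch + exclusive_ch)  # input modality

    @staticmethod
    def _decoder(in_ch):
        return nn.Sequential(
            nn.ConvTranspose2d(in_ch, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1),
        )

    def forward(self, retino):
        z = self.encoder(retino)
        z_shared = z[:, :self.shared_ch]
        return self.to_other(z_shared), self.to_self(z)

def pretext_loss(model, retino, angio):
    pred_angio, pred_self = model(retino)
    # The shared features must explain the other modality; the full code must
    # also retain whatever is exclusive to the input modality.
    return (nn.functional.l1_loss(pred_angio, angio)
            + nn.functional.l1_loss(pred_self, retino))
```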

7.
Comput Biol Med ; 140: 105101, 2021 Dec 03.
Article in English | MEDLINE | ID: mdl-34875412

ABSTRACT

Medical imaging, and particularly retinal imaging, makes it possible to accurately diagnose many eye pathologies as well as some systemic diseases such as hypertension or diabetes. Registering these images is crucial to correctly compare key structures, not only within patients, but also to contrast data with a model or among a population. Currently, this field is dominated by complex classical methods, because novel deep learning methods cannot yet compete in terms of results and commonly used methods are difficult to adapt to the retinal domain. In this work, we propose a novel method to register color fundus images based on previous works which employed classical approaches to detect domain-specific landmarks. Instead, we propose to use deep learning methods for the detection of these highly domain-specific landmarks. Our method uses a neural network to detect the bifurcations and crossovers of the retinal blood vessels, whose arrangement and location are unique to each eye and person. This proposal is the first deep learning feature-based registration method in fundus imaging. These keypoints are matched using a method based on RANSAC (Random Sample Consensus) without the requirement to calculate complex descriptors. Our method was tested using the public FIRE dataset, although the landmark detection network was trained using the DRIVE dataset. Our method provides accurate results, with a registration score of 0.657 for the whole FIRE dataset (0.908 for category S, 0.293 for category P and 0.660 for category A). Therefore, our proposal can compete with complex classical methods and outperforms state-of-the-art deep learning methods.
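
An illustrative descriptor-free RANSAC in the spirit of the approach above: vascular landmarks (bifurcations/crossovers) detected in both images are aligned by repeatedly sampling candidate correspondences, fitting a similarity transform, and keeping the hypothesis with most inliers. The sampling scheme and the similarity model are simplifying assumptions, not the paper's exact matching procedure.

```python
import numpy as np

def fit_similarity(p, q):
    """Similarity transform (scale, rotation, translation) from 2 point pairs."""
    dp, dq = p[1] - p[0], q[1] - q[0]
    scale = np.linalg.norm(dq) / max(np.linalg.norm(dp), 1e-8)
    ang = np.arctan2(dq[1], dq[0]) - np.arctan2(dp[1], dp[0])
    R = scale * np.array([[np.cos(ang), -np.sin(ang)],
                          [np.sin(ang), np.cos(ang)]])
    t = q[0] - R @ p[0]
    return R, t

def ransac_register(src_pts, dst_pts, iters=2000, tol=10.0, rng=None):
    rng = rng or np.random.default_rng(0)
    best, best_inliers = None, -1
    for _ in range(iters):
        i = rng.choice(len(src_pts), 2, replace=False)
        j = rng.choice(len(dst_pts), 2, replace=False)
        R, t = fit_similarity(src_pts[i], dst_pts[j])
        warped = src_pts @ R.T + t
        # An inlier is a warped source landmark with some destination landmark nearby.
        d = np.linalg.norm(warped[:, None, :] - dst_pts[None, :, :], axis=2)
        inliers = int(np.sum(d.min(axis=1) < tol))
        if inliers > best_inliers:
            best, best_inliers = (R, t), inliers
    return best, best_inliers
```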

8.
Artif Intell Med ; 118: 102116, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34412839

ABSTRACT

BACKGROUND AND OBJECTIVES: The study of the retinal vasculature represents a fundamental stage in the screening and diagnosis of many high-incidence diseases, both systemic and ophthalmic. A complete retinal vascular analysis requires the segmentation of the vascular tree along with the classification of the blood vessels into arteries and veins. Early automatic methods approached these complementary segmentation and classification tasks in two sequential stages. Currently, however, the two tasks are approached as a joint semantic segmentation, because the classification results highly depend on the effectiveness of the vessel segmentation. In that regard, we propose a novel approach for the simultaneous segmentation and classification of the retinal arteries and veins from eye fundus images. METHODS: We propose a novel method that, unlike previous approaches, and thanks to a novel loss, decomposes the joint task into three segmentation problems targeting arteries, veins and the whole vascular tree. This configuration makes it possible to handle vessel crossings intuitively and directly provides accurate segmentation masks of the different target vascular trees. RESULTS: The ablation study on the public Retinal Images vessel Tree Extraction (RITE) dataset demonstrates that the proposed method offers a satisfactory performance, particularly in the segmentation of the different structures. Furthermore, the comparison with the state of the art shows that our method achieves highly competitive results in the artery/vein classification, while significantly improving the vascular segmentation. CONCLUSIONS: The proposed multi-segmentation method detects more vessels and better segments the different structures, while achieving a competitive classification performance. In these terms, our approach outperforms various reference works. Moreover, in contrast with previous approaches, the proposed method directly detects the vessel crossings and preserves the continuity of both arteries and veins at these complex locations.
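
A minimal sketch of the multi-segmentation formulation described above: the network produces three independent sigmoid masks (arteries, veins, whole vascular tree) and the loss is the sum of a binary segmentation loss per mask. At crossings both the artery and the vein mask can be active, which is what lets the formulation handle crossings directly. The plain BCE combination below is a plausible reading of the decomposition, not the paper's exact loss definition.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def av_multiseg_loss(logits, artery_gt, vein_gt, vessel_gt):
    """logits: (B, 3, H, W) raw outputs for artery, vein and full vessel tree.

    The ground-truth masks are binary maps of the same spatial size; both the
    artery and vein masks may be 1 at crossing pixels.
    """
    return (bce(logits[:, 0], artery_gt)
            + bce(logits[:, 1], vein_gt)
            + bce(logits[:, 2], vessel_gt))

# Any encoder-decoder segmentation network with a 3-channel output can be
# plugged in front of this loss.
```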


Subject(s)
Retinal Artery; Algorithms; Fundus Oculi; Retinal Artery/diagnostic imaging; Retinal Vessels/diagnostic imaging
9.
Comput Methods Programs Biomed ; 186: 105201, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31783244

ABSTRACT

BACKGROUND AND OBJECTIVES: The analysis of the retinal vasculature plays an important role in the diagnosis of many ocular and systemic diseases. In this context, the accurate detection of the vessel crossings and bifurcations is an important requirement for the automated extraction of relevant biomarkers. In that regard, we propose a novel approach that addresses the simultaneous detection of vessel crossings and bifurcations in eye fundus images. METHOD: We propose to formulate the detection of vessel crossings and bifurcations in eye fundus images as a multi-instance heatmap regression. In particular, a deep neural network is trained to predict multi-instance heatmaps that model the likelihood of a pixel being a landmark location. This novel approach makes predictions on full images and integrates the detection and distinction of the vascular landmarks into a single step. RESULTS: The proposed method is validated on two public reference datasets that include detailed annotations for vessel crossings and bifurcations in eye fundus images. The conducted experiments show that the proposed method offers a satisfactory performance. In particular, it achieves 74.23% and 70.90% F-score for the detection of crossings and bifurcations, respectively, in color fundus images. Furthermore, the proposed method outperforms previous works by a significant margin. CONCLUSIONS: The proposed multi-instance heatmap regression successfully exploits the potential of modern deep learning algorithms for the simultaneous detection of retinal vessel crossings and bifurcations. Consequently, this results in a significant improvement over previous methods, which will further facilitate the automated analysis of the retinal vasculature in many pathological conditions.
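
A hedged sketch of the multi-instance heatmap regression idea: each landmark class (crossing, bifurcation) gets one target heatmap built by placing a Gaussian at every landmark, a network regresses these maps from the full image, and detections are read out as local maxima above a threshold. The Gaussian size and the naive peak extraction are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def make_heatmap(points, shape, sigma=5.0):
    """Target heatmap with one Gaussian bump per landmark (multi-instance)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    heat = np.zeros(shape, dtype=np.float32)
    for (x, y) in points:
        bump = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        heat = np.maximum(heat, bump)  # overlapping landmarks keep their peaks
    return heat

def extract_peaks(heat, threshold=0.5):
    """Naive non-maximum suppression over 3x3 neighbourhoods."""
    peaks = []
    for y in range(1, heat.shape[0] - 1):
        for x in range(1, heat.shape[1] - 1):
            patch = heat[y - 1:y + 2, x - 1:x + 2]
            if heat[y, x] >= threshold and heat[y, x] == patch.max():
                peaks.append((x, y))
    return peaks

# Training then reduces to a pixel-wise regression (e.g. MSE) between the
# predicted and the target heatmaps, one output channel per landmark class.
```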


Subject(s)
Fundus Oculi; Hot Temperature; Retinal Vessels/diagnostic imaging; Algorithms; Humans; Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer